    A framework for generalized group testing with inhibitors and its potential application in neuroscience

    Get PDF
    The main goal of group testing with inhibitors (GTI) is to efficiently identify a small number of defective items and inhibitor items in a large set of items. A test on a subset of items is positive if the subset satisfies some specific properties. Inhibitor items cancel the effects of defective items, which often makes the outcome of a test containing defective items negative. Different GTI models can be formulated by considering how specific properties lead to different cancellation effects. This work introduces generalized GTI (GGTI), in which a new type of item is added: hybrid items. A hybrid item plays the roles of both a defective item and an inhibitor item. Since the number of instances of GGTI is large (more than 7 million), we introduce a framework for classifying all types of items non-adaptively, i.e., all tests are designed in advance. We then explain how GGTI can be used to classify neurons in neuroscience. Finally, we show how to realize our proposed scheme in practice.
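    As a concrete illustration of the cancellation effect described above, the following minimal sketch simulates the classical GTI test rule (a test is positive iff the pool contains at least one defective and no inhibitor). It is not the paper's construction; how hybrid items behave depends on the particular GGTI instance, so they are omitted here, and the function name gti_test is purely illustrative.

    # Minimal simulation of the classical GTI test rule (illustrative only):
    # a test is positive iff the pool contains at least one defective and no inhibitor.
    # Hybrid-item behaviour varies across GGTI instances and is not modelled here.

    def gti_test(pool, defectives, inhibitors):
        """Return True (positive) under the 'inhibitors cancel defectives' rule."""
        has_defective = any(item in defectives for item in pool)
        has_inhibitor = any(item in inhibitors for item in pool)
        return has_defective and not has_inhibitor

    # Example: items 0..9, item 2 is defective, item 7 is an inhibitor.
    defectives, inhibitors = {2}, {7}
    print(gti_test({1, 2, 3}, defectives, inhibitors))   # True: defective present, no inhibitor
    print(gti_test({2, 7, 9}, defectives, inhibitors))   # False: inhibitor cancels the defective
    print(gti_test({0, 4, 5}, defectives, inhibitors))   # False: no defective present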

    Efficiently Decodable Non-Adaptive Threshold Group Testing

    Full text link
    We consider non-adaptive threshold group testing for identification of up to $d$ defective items in a set of $n$ items, where a test is positive if it contains at least $2 \leq u \leq d$ defective items, and negative otherwise. The defective items can be identified using $t = O\left( \left( \frac{d}{u} \right)^u \left( \frac{d}{d-u} \right)^{d-u} \left( u \log{\frac{d}{u}} + \log{\frac{1}{\epsilon}} \right) \cdot d^2 \log{n} \right)$ tests with probability at least $1 - \epsilon$ for any $\epsilon > 0$, or $t = O\left( \left( \frac{d}{u} \right)^u \left( \frac{d}{d-u} \right)^{d-u} d^3 \log{n} \cdot \log{\frac{n}{d}} \right)$ tests with probability 1. The decoding time is $t \times \mathrm{poly}(d^2 \log{n})$. This result significantly improves the best known results for decoding non-adaptive threshold group testing: $O(n \log{n} + n \log{\frac{1}{\epsilon}})$ for probabilistic decoding, where $\epsilon > 0$, and $O(n^u \log{n})$ for deterministic decoding.
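    The sketch below illustrates the threshold test rule and numerically evaluates the stated probabilistic test-count bound with all hidden constants dropped (and ignoring the edge case $u = d$); the function names are illustrative and not from the paper.

    import math

    def threshold_test(pool, defectives, u):
        """A test is positive iff the pool contains at least u defective items."""
        return len(pool & defectives) >= u

    def num_tests_bound(n, d, u, eps):
        """Rough evaluation of the O(.) expression for the probabilistic scheme
        (constants omitted, so this is only an order-of-magnitude indicator)."""
        return ((d / u) ** u
                * (d / (d - u)) ** (d - u)
                * (u * math.log(d / u) + math.log(1 / eps))
                * d ** 2 * math.log(n))

    print(threshold_test({1, 4, 8}, {4, 8, 20}, u=2))        # True: two defectives in the pool
    print(round(num_tests_bound(n=10**6, d=10, u=3, eps=0.01)))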

    On the Transferability of Adversarial Examples between Encrypted Models

    Full text link
    Deep neural networks (DNNs) are well known to be vulnerable to adversarial examples (AEs). In addition, AEs exhibit adversarial transferability: AEs generated for a source model also fool other (target) models. In this paper, we investigate for the first time the transferability of models encrypted for adversarially robust defense. To objectively verify the transferability property, the robustness of the models is evaluated using a benchmark attack method called AutoAttack. In an image-classification experiment, the use of encrypted models is confirmed not only to be robust against AEs but also to reduce the influence of AEs in terms of model transferability. Comment: to appear in ISPACS 202
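    Transferability evaluations of this kind follow a standard recipe: craft AEs against a source model and measure how much they degrade a separate target model. The paper uses AutoAttack; the sketch below substitutes a single-step FGSM attack in plain PyTorch purely to keep the example self-contained, and the model and data-loader objects are placeholders.

    import torch
    import torch.nn.functional as F

    def fgsm_examples(model, x, y, eps=8 / 255):
        """Craft adversarial examples against the source model with one FGSM step."""
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        return (x + eps * x.grad.sign()).clamp(0, 1).detach()

    @torch.no_grad()
    def accuracy(model, x, y):
        return (model(x).argmax(dim=1) == y).float().mean().item()

    def transfer_accuracy(source_model, target_model, loader, eps=8 / 255):
        """Accuracy of the target model on AEs crafted against the source model.
        A large drop versus clean accuracy indicates high transferability."""
        accs = []
        for x, y in loader:
            x_adv = fgsm_examples(source_model, x, y, eps)
            accs.append(accuracy(target_model, x_adv, y))
        return sum(accs) / len(accs)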

    Hindering Adversarial Attacks with Multiple Encrypted Patch Embeddings

    Full text link
    In this paper, we propose a new key-based defense focusing on both efficiency and robustness. Although the previous key-based defense appears effective against adversarial examples, carefully designed adaptive attacks can bypass it, and it is difficult to train on large datasets such as ImageNet. We build upon the previous defense with two major improvements: (1) efficient training and (2) optional randomization. The proposed defense utilizes one or more secret patch embeddings and classifier heads with a pre-trained isotropic network. When more than one secret embedding is used, the proposed defense enables randomization at inference. Experiments were carried out on the ImageNet dataset, and the proposed defense was evaluated against an arsenal of state-of-the-art attacks, including adaptive ones. The results show that the proposed defense achieves high robust accuracy and clean accuracy comparable to the previous key-based defense. Comment: To appear in APSIPA ASC 202
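    Conceptually, such a defense keeps several secret (patch embedding, classifier head) pairs around one shared pre-trained isotropic backbone and, when more than one pair is available, samples one at random per inference. The sketch below only illustrates that randomized selection under assumed shapes; it is not the paper's architecture, and backbone is a placeholder module.

    import random
    import torch.nn as nn

    class MultiKeyClassifier(nn.Module):
        """Illustrative wrapper: several secret patch embeddings and heads around
        one shared pre-trained isotropic backbone; one pair is sampled per forward pass."""

        def __init__(self, backbone, num_keys=4, patch=16, in_ch=3, dim=384, classes=1000):
            super().__init__()
            self.backbone = backbone  # placeholder: maps (B, N, dim) -> (B, dim)
            self.embeds = nn.ModuleList(
                nn.Conv2d(in_ch, dim, kernel_size=patch, stride=patch) for _ in range(num_keys)
            )
            self.heads = nn.ModuleList(nn.Linear(dim, classes) for _ in range(num_keys))

        def forward(self, x):
            k = random.randrange(len(self.embeds))                   # optional randomization over keys
            tokens = self.embeds[k](x).flatten(2).transpose(1, 2)    # (B, N, dim) patch tokens
            return self.heads[k](self.backbone(tokens))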

    Sustainable Cloud Computing

    Get PDF